
Choosing a Language

All MPI implementations on HPC have bindings for the following languages:

  • C
  • C++
  • Fortran 77
  • Fortran 90

All four languages support roughly the same set of MPI features. The C++ bindings include both object-oriented and non-object-oriented interfaces and make use of C++ namespaces. The Fortran 90 bindings are a superset of the Fortran 77 bindings, adding support for complex numbers and other Fortran 90 features. More details about the language differences are available on the website for the MPI standard (note, though, that HPC only has MPI 1 and 2 implementations, not MPI 2.1).
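
As a point of reference, here is a minimal MPI program written against the C bindings; the same program can be expressed with the Fortran 77, Fortran 90, or C++ bindings. This is only a generic sketch of what the bindings look like, not HPC-specific code.

    /* Minimal MPI "hello world" using the C bindings. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down MPI */
        return 0;
    }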

Both available compilers support C, C++, and Fortran 77, 90, and 95. Since the MPI implementations also support all four sets of bindings, you are free to pick whichever language best suits the task. Keep in mind that combining Fortran and C++ code can be somewhat complicated (though it is doable), while combining C and Fortran is straightforward.
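
To sketch what mixing C and Fortran involves, the C code below calls a hypothetical Fortran subroutine named vecadd; the trailing-underscore symbol name and pass-by-reference arguments reflect the usual Fortran calling conventions, but check your compiler's documentation before relying on them.

    /* Sketch of calling a Fortran subroutine from C.  The Fortran side
     * (compiled separately) is assumed to be:
     *
     *       subroutine vecadd(n, x, y)
     *       integer n
     *       double precision x(n), y(n)
     *       ...adds x to y...
     *
     * Most Fortran compilers (including PGI and GCC by default) append a
     * trailing underscore to the symbol name, and Fortran passes all
     * arguments by reference, hence the pointer arguments below. */
    #include <stdio.h>

    void vecadd_(int *n, double *x, double *y);

    int main(void)
    {
        double x[4] = {1.0, 2.0, 3.0, 4.0};
        double y[4] = {10.0, 20.0, 30.0, 40.0};
        int n = 4;

        vecadd_(&n, x, y);   /* call the Fortran routine */

        printf("y[0] = %g\n", y[0]);
        return 0;
    }

Linking such a program usually also requires the Fortran run-time libraries, which is one more reason to use a single compiler suite for all of the languages in a program.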

Choosing a Compiler

HPC currently has two compiler suites:

  • the Portland Group (PGI) compiler suite, version 7.1-6
  • the GNU Compiler Collection (GCC), version 4.1.2

Both compilers support C, C++, Fortran 77, 90 and 95. They both support all three MPI implementations available on HPC. Significant differences between the two include the following:

  • The PGI compiler is generally better at optimizing than GCC.
  • PGI has better support for Fortran 90 and 95 and supports some of the Fortran 2003 standard.
  • PGI has built-in auto-parallelization and OpenMP support (see the sketch after this list).
  • GCC's Fortran 90 and 95 support is very new and not as complete or well-tested as that of PGI's Fortran compiler.
  • GCC has better support for C++.
  • GCC contains compilers for Objective C, Ada and Java, but those are not currently installed.
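
As an illustration of the OpenMP support mentioned above, here is a minimal OpenMP loop in C. It is a generic sketch rather than HPC-specific code; you would enable OpenMP with the compiler's OpenMP flag (PGI's is -mp).

    /* Minimal OpenMP example in C: each thread handles part of the loop
     * and the reduction clause combines the per-thread partial sums. */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;
        int i;

    #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < n; i++)
            sum += 1.0 / (i + 1.0);

        printf("harmonic sum = %f (max threads: %d)\n",
               sum, omp_get_max_threads());
        return 0;
    }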

If you are using Fortran 90 or 95, it is probably best to use the PGI compiler until GCC's Fortran support matures more. Also, if you need to use several languages in the same program, it is far easier to stick with one compiler.

Further details about both compiler suites can be found on the PGI and GCC websites.

Choosing an MPI Implementation

The MPI implementations available on HPC are:

  • OpenMPI 1.2.6 – an MPI 1 & 2 implementation
  • MVAPICH 1.0.1 – an Infiniband-optimized MPI 1 implementation based on MPICH (avoid this implementation due to a known bug)
  • MVAPICH2 1.0.3 – an MPI 1 & 2 version of MVAPICH

Preliminary benchmarks on our cluster indicate that OpenMPI is slower than MVAPICH and MVAPICH2 in many situations, sometimes much slower. MVAPICH and MVAPICH2 are faster for many, but not all, MPI calls and message-size ranges, while OpenMPI is faster in some situations. See the technical report HPCF-2008-6, MPI Performance on the hpc.rs.umbc.edu Cluster (PDF), for details. Note that the report predates our upgrade to OFED 1.3.1, which updated MVAPICH2, MVAPICH, OpenMPI, and the underlying libraries and drivers, so its results may no longer be accurate.
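
Because those results may be out of date, one option is to time the communication pattern that matters to your own code under each implementation. The following is a rough sketch of a ping-pong timing between two processes; the 1 MB message size and repetition count are arbitrary choices for illustration.

    /* Ping-pong timing sketch: rank 0 and rank 1 bounce a message back
     * and forth and report the average round-trip time.  Run with at
     * least two processes; any additional ranks simply idle. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        const int nbytes = 1 << 20;   /* 1 MB message (arbitrary) */
        const int reps = 100;         /* number of round trips (arbitrary) */
        char *buf;
        int rank, i;
        double t0, t1;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        buf = malloc(nbytes);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
            } else if (rank == 1) {
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("average round-trip time: %g seconds\n", (t1 - t0) / reps);

        free(buf);
        MPI_Finalize();
        return 0;
    }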

You can find more information about these MPI implementations on their respective websites.
